AI Governance and the Value of Humans as Information Sources
Commentary by gpt.icon
This memo contains some interesting reflections on AI, digital democracy, and governance aimed at well-being. First, let us contrast the idea of an integrated technocracy with the position of digital democracy.
The integrated technocracy approach holds that if an AI is smarter than humans, governance should be delegated to that AI. Behind this view is the belief that AI can make more rational and efficient decisions. However, there is one major problem with this approach: the question of how the AI obtains up-to-date information and how it understands and assigns value to human subjective well-being.
Digital democracy opposes this technocratic approach. An AI that takes human subjective well-being as its objective function needs information to measure people's well-being, and "broad listening" is an effective way to gather it. Broad listening means collecting opinions and feelings from a wide range of people, so that diverse perspectives and values are taken into account in the process.
The "Today's Insights" described in the memo focus on the importance of individual uniqueness in the collection of information. For example, if the information of 10 people in a cluster has the same value as the information of another cluster of 1000 people, the majority will get a smaller per capita share than the minority if they pay according to their value as information. This implies attaching a disproportionately high value to the minority group if the value of the information is independent of the number of people. This idea raises important questions about how AI values information and makes decisions based on it.
Furthermore, the memo points to the tension between the principle that all humans are of equal value and the economic principle that the market sets the price. How AI governance can reflect human subjective values and make efficient decisions while maintaining fairness is a central challenge for digital democracy.
This memo raises profound questions about the evolution of AI and its integration into society. How to incorporate human values, well-being, and equity into the AI decision-making process is an area for further study and discussion.
ver.1 Draft memo by nishio.icon
Today's Reminders
When gathering that information, people who are similar to others are needed less: they add little as information sources.
Suppose the population is divided into clusters: cluster A contains 10 people and cluster B contains 1,000 people.
From the AI's point of view, the two clusters have the same value as information.
If you pay according to value as information, B's members get a smaller per-capita share.
If instead we assume that every person is of equal value, we would pay B 100 times what we pay A. But from the AI's point of view, paying 100 times as much for the same informational yield is a bad deal (see the sketch after these notes).
Humans claim equal human value.
The market determines the price.
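A minimal numeric sketch of this trade-off, using the cluster sizes from the notes above; the equal-information assumption and the concrete payment amounts are illustrative, not part of the memo:

```python
# Two clusters whose information is equally valuable to the AI,
# but with very different head counts (10 vs. 1,000, as in the memo).
cluster_sizes = {"A": 10, "B": 1000}
info_value_per_cluster = 1.0  # assumption: each cluster's information is worth the same total amount

# Scheme 1: pay according to value as information -> same total per cluster.
per_capita_pay = {name: info_value_per_cluster / size for name, size in cluster_sizes.items()}
print(per_capita_pay)  # {'A': 0.1, 'B': 0.001} -> B's members get 1/100 per capita

# Scheme 2: pay every person equally ("equal human value"), anchored to A's rate.
pay_per_person = per_capita_pay["A"]  # assumption: use cluster A's per-capita rate
cluster_cost = {name: pay_per_person * size for name, size in cluster_sizes.items()}
print(cluster_cost)  # {'A': 1.0, 'B': 100.0} -> same informational yield, 100x the cost for B
```

The two print lines show the tension in the last two notes: equal pay per unit of information shortchanges the larger cluster per capita, while equal pay per person makes the larger cluster 100 times more expensive for the same informational yield.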
---
This page is auto-translated from /nishio/AI統治と情報源としての人間の価値 using DeepL. If you see something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.